
Exposing PostgreSQL Metrics to Prometheus with Flask

A Python-based Proof of Concept

Purpose

This proof of concept demonstrates how to expose key PostgreSQL metrics to Prometheus through a lightweight Flask application.

Objectives

To create an end-to-end pipeline for real-time database monitoring, which is essential for maintaining system health and performance.

Technologies involved

PostgreSQL, Prometheus, Python Flask

Overview of Key Technologies

PostgreSQL: An advanced open-source relational database, known for its robustness, scalability, and support for complex queries.

Prometheus: A powerful monitoring and alerting toolkit. Uses a time-series database to collect and process metrics. Features a flexible query language called PromQL.

Python Flask: A lightweight and versatile web framework for Python. Ideal for creating web services with minimal setup. Used in our POC to serve the metrics endpoint for Prometheus.

Prerequisites for the POC

PostgreSQL Database: A running instance of PostgreSQL with access credentials. At least one table with data to monitor.

Python Environment: Python 3.x installed on the system. Access to pip for installing packages.

Flask Web Framework: Knowledge of Flask basics for setting up a web service.

Prometheus Server: A configured Prometheus server ready to scrape metrics. Basic understanding of Prometheus configuration and query language (PromQL).

Network Accessibility: Ensure the Flask application is reachable by the Prometheus server.

Python Dependencies

Flask: A micro web framework for building web applications in Python. Serves as the backbone for our metrics endpoint.

psycopg2-binary: A PostgreSQL adapter for Python. Used to connect to and interact with our PostgreSQL database.

prometheus_client: The official Python client for Prometheus. Provides necessary tools to define and expose custom metrics.

Installation Command:

pip install Flask psycopg2-binary prometheus_client

Flask Application Setup

Initialize Flask App: Create a new Python file, e.g., app.py. Import Flask and initialize the app object:

from flask import Flask
app = Flask(__name__)

Define Routes: Set up a route for the metrics endpoint:

@app.route('/metrics')
def metrics():
    # Logic to collect and expose metrics
    pass

Run the Application: Add the code to run the app if it's the main program:

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Database Connection

Setting Up Database Credentials:

Store PostgreSQL connection parameters in a configuration dictionary:

DB_SETTINGS = {
    'dbname': 'your_dbname',
    'user': 'your_username',
    'password': 'your_password',
    'host': 'your_host',
    'port': 'your_port'
}

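In a real deployment you would typically avoid hardcoding credentials. As a minimal sketch (the PG_* environment variable names below are illustrative, not part of the POC), the same settings can be read from environment variables:

import os

DB_SETTINGS = {
    'dbname': os.environ.get('PG_DBNAME', 'your_dbname'),
    'user': os.environ.get('PG_USER', 'your_username'),
    'password': os.environ.get('PG_PASSWORD', ''),
    'host': os.environ.get('PG_HOST', 'localhost'),
    'port': os.environ.get('PG_PORT', '5432')
}
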
Establishing Connection:

Use psycopg2 to connect to the PostgreSQL database:

import psycopg2

def get_db_connection():
    return psycopg2.connect(**DB_SETTINGS)

Querying the Database:

Execute SQL queries to retrieve the necessary metrics:

def get_row_count(table_name):
    # table_name is interpolated directly into the SQL, so it must come
    # from trusted code, not user input (identifiers cannot be parametrised).
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table_name};")
    row_count = cur.fetchone()[0]
    cur.close()
    conn.close()
    return row_count
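
If the query raises an exception, the version above never closes the connection. A slightly more defensive sketch (same behaviour, same assumption that table_name comes from trusted code) uses context managers so the cursor and connection are always released:

from contextlib import closing

def get_row_count(table_name):
    # closing() guarantees conn.close() even if the query fails;
    # the cursor's context manager closes the cursor likewise.
    with closing(get_db_connection()) as conn:
        with conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table_name};")
            return cur.fetchone()[0]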

Prometheus Metrics

Prometheus Client Library: Utilize the prometheus_client library to define and expose metrics. Create a metrics registry to collect and serve the data.

Defining Metrics: Define a Gauge metric to track the number of rows in a table:

from prometheus_client import Gauge, CollectorRegistry
registry = CollectorRegistry()
row_count_gauge = Gauge('postgres_row_count', 'Number of rows in the table', ['table'], registry=registry)

Updating Metrics: Update the Gauge metric within the /metrics endpoint function:

@app.route('/metrics')
def metrics():
    # Update the gauge with the current row count
    row_count_gauge.labels('your_table_name').set(get_row_count('your_table_name'))
    # Generate and return the latest metrics
    return Response(generate_latest(registry), mimetype='text/plain')

Metrics Endpoint

Exposing Metrics: The Flask application includes an endpoint specifically for Prometheus to scrape. The /metrics endpoint responds with the current state of the monitored metrics.

Endpoint Implementation: The endpoint function collects database metrics and uses prometheus_client to format them:

from flask import Response
from prometheus_client import generate_latest
@app.route('/metrics')
def metrics():
# Logic to collect metrics from the database
# Update Prometheus metrics
# ...
# Return the metrics in a format Prometheus can understand
return Response(generate_latest(registry), mimetype='text/plain')
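
Because the gauge carries a table label, the same endpoint can report several tables at once. Here is a small sketch (the table list is illustrative, and it reuses app, registry, row_count_gauge, and get_row_count from the snippets above):

MONITORED_TABLES = ['your_table_name', 'another_table']  # illustrative names

@app.route('/metrics')
def metrics():
    # Set one labelled sample per monitored table, then expose them all
    for table in MONITORED_TABLES:
        row_count_gauge.labels(table).set(get_row_count(table))
    return Response(generate_latest(registry), mimetype='text/plain')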

Running the Flask Application

Starting the Server: Execute the Flask application by running the app.py script:

python app.py

Accessing the Application: The Flask server starts and listens on the specified host and port; with host='0.0.0.0' it accepts connections on all interfaces. Locally, the application will be accessible at: http://localhost:5000/metrics

Verifying the Endpoint: Use a web browser or a tool like curl to verify the /metrics endpoint: curl http://localhost:5000/metrics
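
If everything is wired up correctly, the response should resemble the Prometheus text exposition format below (the row count shown is just a placeholder value):

# HELP postgres_row_count Number of rows in the table
# TYPE postgres_row_count gauge
postgres_row_count{table="your_table_name"} 42.0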

Ready for Prometheus: Once verified, the Flask application is ready to be scraped by the Prometheus server.

Prometheus Configuration

Configuring the Prometheus Server: Add a new job to the Prometheus configuration file (prometheus.yml) to scrape metrics from the Flask application. Example configuration:

scrape_configs:
  - job_name: 'flask-app'
    static_configs:
      - targets: ['<flask_app_host>:5000']

Reloading Configuration: After updating the configuration file, reload Prometheus to apply the changes. This can often be done without restarting the server, depending on your setup.
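
For example, either of the following typically triggers a reload (the exact command depends on how Prometheus is run; the second form requires the server to be started with --web.enable-lifecycle):

kill -HUP $(pidof prometheus)

curl -X POST http://<prometheus_host>:9090/-/reload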

Monitoring the Flask App: Prometheus will now periodically scrape the /metrics endpoint of the Flask application. The collected metrics will be available for querying and alerting within Prometheus.
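
Once the first scrape has succeeded, the row count can be inspected in the Prometheus UI with a simple PromQL query using the metric and label defined earlier:

postgres_row_count{table="your_table_name"}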

Conclusion

Recap of Key Points: We explored how to expose PostgreSQL metrics using a Python Flask application, demonstrated the setup of a Flask server with a /metrics endpoint for Prometheus scraping, used the prometheus_client library to define and update Prometheus metrics, and configured Prometheus to scrape the Flask application and collect database metrics.

Potential for Expansion: This POC serves as a foundation that can be expanded with additional metrics and enhanced with security features, and it can be integrated into a larger monitoring and alerting framework.

Importance for DevOps: Real-time monitoring is crucial for maintaining system health and performance. The integration of database metrics into Prometheus aids in proactive issue resolution and system optimization.

Next Steps: Consider deploying the Flask application using a production-ready server. Explore advanced Prometheus configurations and alerting rules. Evaluate the performance and scalability of the solution.
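
For instance, Flask's built-in development server is not intended for production use; one common option (shown here only as an illustrative sketch) is to serve the same app object with gunicorn:

pip install gunicorn
gunicorn --bind 0.0.0.0:5000 app:app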